Jeff Han demos his breakthrough touchscreen

I'm really, really excited to be here today, because I'm about to show you some stuff that's just ready to come out of the lab, literally, and I'm really glad that you guys are going to be amongst the first to be able to see it in person, because I really, really think this is going to change—really change—the way we interact with machines from this point on.

Now, this is a rear-projected drafting table. It's about 36 inches wide and it's equipped with a multi-touch sensor. Now, normal touch sensors that you see, like on a kiosk or an interactive whiteboard, can only register one point of contact at a time. This thing allows you to have multiple points at the same time. I can use both my hands; I can use chording actions; I can just go right up and use all 10 fingers if I wanted to. You know, like that.
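
As a very rough sketch (this is not Han's actual system; the type, the per-frame callback, and the field names below are assumptions), this is roughly what "multiple points at the same time" looks like to the software: the sensor reports a whole set of contacts every frame, each with a persistent id, a position, and, on hardware like this, a pressure value.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    touch_id: int    # stable id for as long as this finger stays down
    x: float         # position on the table surface
    y: float
    pressure: float  # normalized 0..1

def on_frame(touches: list[TouchPoint]) -> None:
    """Called once per sensor frame with all current contacts.

    A single-touch kiosk would hand you at most one point here;
    a multi-touch surface can hand you ten or more at once.
    """
    for t in touches:
        print(f"contact {t.touch_id}: ({t.x:.1f}, {t.y:.1f}) pressure={t.pressure:.2f}")
```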

Now, multi-touch sensing isn't completely new. I mean, people like Bill Buxton were playing around with it back in the '80s. However, the approach I built here is actually high-resolution, low-cost, and probably most importantly, very scalable. So the technology, you know, isn't the most exciting thing here right now, other than probably its newfound accessibility. What's really interesting here is what you can do with it and the kind of interfaces you can build on top of it. So let's see.

So, for instance, we have a lava lamp application here. Now, you can see, I can use both of my hands to kind of squeeze together and put the blobs together. I can inject heat into the system here, or I can pull it apart with two of my fingers. It's completely intuitive; there's no instruction manual. The interface just kind of disappears. This started out as kind of a screensaver app that one of the Ph.D. students in our lab, Ilya Rosenberg, made. But I think its true identity comes out here.

Now what's great about a multi-touch sensor is that, you know, I could be doing this with as many fingers as I want here, but of course multi-touch also inherently means multi-user. So Chris could be out here interacting with another part of the lava, while I kind of play around with it here. You can imagine a new kind of sculpting tool, where I'm kind of warming something up, making it malleable, and then letting it cool down and solidify in a certain state. Google should have something like this in their lobby. (Laughter)

I'll show you something—a little more of a concrete example here, as this thing loads. This is a photographer's light box application. Again, I can use both of my hands to interact and move photos around. But what's even cooler is that if I have two fingers, I can actually grab a photo and then stretch it out like that really easily. I can pan, zoom and rotate it effortlessly. I can do that grossly with both of my hands, or I can do it just with two fingers on each of my hands together. If I grab the canvas, I can kind of do the same thing—stretch it out. I can do it simultaneously, where I'm holding this down, and gripping on another one, stretching this out like this.
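
The talk doesn't break that two-finger grab down, but the geometry behind this kind of pan, zoom, and rotate is standard and easy to sketch: track the pair of contact points, then read the pan off the midpoint's motion, the zoom off the change in finger spread, and the rotation off the change in the pair's angle. This is a generic sketch, not Han's code; the function name and point format are hypothetical.

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Return (translation, scale, rotation) that maps the old finger pair
    onto the new one. Each point is an (x, y) tuple."""
    # Vector between the two fingers before and after the move.
    vx_old, vy_old = p2_old[0] - p1_old[0], p2_old[1] - p1_old[1]
    vx_new, vy_new = p2_new[0] - p1_new[0], p2_new[1] - p1_new[1]

    # Zoom is the ratio of finger spreads; rotation is the change in angle.
    scale = math.hypot(vx_new, vy_new) / math.hypot(vx_old, vy_old)
    rotation = math.atan2(vy_new, vx_new) - math.atan2(vy_old, vx_old)

    # Pan is the motion of the midpoint between the two fingers.
    mid_old = ((p1_old[0] + p2_old[0]) / 2, (p1_old[1] + p2_old[1]) / 2)
    mid_new = ((p1_new[0] + p2_new[0]) / 2, (p1_new[1] + p2_new[1]) / 2)
    translation = (mid_new[0] - mid_old[0], mid_new[1] - mid_old[1])

    return translation, scale, rotation

# Example: fingers spread apart and rotate slightly around their midpoint.
print(two_finger_transform((0, 0), (100, 0), (-10, 0), (110, 10)))
```

Applying the returned translation, scale, and rotation about the old midpoint every frame gives the continuous stretch-and-spin behavior seen in the demo.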

Again, the interface just disappears here. There's no manual. This is exactly what you expect, especially if you haven't interacted with a computer before. Now, when you have initiatives like the $100 laptop, I kind of cringe at the idea that we're going to introduce a whole new generation of people to computing with this standard mouse-and-windows-pointer interface. This is something that I think is really the way we should be interacting with machines from this point on. (Applause)

Now, of course, I can bring up a keyboard. And I can bring that around, put that up there. Now, obviously, this is kind of a standard keyboard, but of course I can rescale it to make it work well for my hands. And that's really important, because there's no reason in this day and age that we should be conforming to a physical device. That leads to bad things, like RSI. We have so much technology nowadays that these interfaces should start conforming to us. So little effort goes into actually improving the way we interact with interfaces. This keyboard is probably actually the wrong direction to go. You can imagine, in the future, as we develop this kind of technology, a keyboard that kind of automatically drifts as your hand moves away, and really intelligently anticipates which key you're trying to stroke with your hands. So—again, isn't this great?

Audience: Where's your lab?

Jeff Han: I'm a research scientist at NYU in New York.

Here's an example of another kind of app. I can make these little fuzz balls. It'll remember the strokes I'm making. Of course I can do it with all my hands. It's pressure-sensitive, as you can notice. But what's neat about that is, again, that two-finger gesture I showed you allows you to zoom in really quickly. Because you don't have to switch to a hand tool or the magnifying-glass tool, you can just continuously make things at multiple scales, all at the same time. I can create big things out here, but I can go back and really quickly return to where I started, and make even smaller things here.
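
The pressure sensitivity is simple to picture in code: map each contact's normalized pressure onto a brush width. This is a minimal, hypothetical mapping; the clamping and the width range are assumptions, not values from the talk.

```python
def stroke_width(pressure: float, min_width: float = 1.0, max_width: float = 12.0) -> float:
    """Map a normalized contact pressure (0..1) to a brush width in pixels."""
    p = max(0.0, min(1.0, pressure))          # clamp out-of-range sensor readings
    return min_width + p * (max_width - min_width)

print(stroke_width(0.2), stroke_width(0.9))   # light touch vs. firm press
```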

Now this is going to be really important as we start getting to things like data visualization. For instance, I think we all really enjoyed Hans Rosling's talk, and he really emphasized something I've been thinking about for a long time too: we have all this great data, but for some reason, it's just sitting there. We're not really accessing it. And part of the reason, I think, is that we'll be helped by things like graphics and visualization and inference tools, but a big part of it is also going to be having better interfaces, so we can drill down into this kind of data while still thinking about the big picture.

Let me show you another app here. This is something called WorldWind. It's done by NASA. It's a kind of—we've all seen Google Earth; this is an open-source version of that. There are plug-ins to be able to load in different data sets that NASA's collected over the years. But as you can see, I can use the same two-fingered gestures to go down and go in really seamlessly. There's no interface, again. It really allows anybody to kind of go in—and, it just does what you'd expect, you know? Again, there's just no interface here. The interface just disappears. I can switch to different data views. That's what's neat about this app here. There you go. NASA's really cool. They have these hyper-spectral images that are false-colored so you can—it's really good for determining vegetative use. Well, let's go back to this.

Now, the great thing about mapping applications is that it's not really 2D; it's kind of 3D. So, again, with a multi-point interface, you can do a gesture like this, and you can tilt around like that, you know. It's not just relegated to a kind of 2D panning and motion. Now, this gesture that we've developed, again, is just putting two fingers down—it's defining an axis of tilt—and I can tilt up and down that way. That's something we just came up with on the spot, you know; it's probably not the right thing to do, but there are such interesting things you can do with this kind of interface. It's just so much fun to play around with, too. (Laughter)
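
The talk doesn't spell out exactly how the finger motion maps to tilt, so the sketch below is just one plausible reading, with hypothetical names: the two fingers' initial positions define the tilt axis, and how far the pair then drags perpendicular to that axis sets the tilt angle.

```python
import math

def tilt_from_two_fingers(f1_start, f2_start, f1_now, f2_now, radians_per_pixel=0.005):
    """Two fingers define the tilt axis; their motion perpendicular to that
    axis sets the tilt angle. Returns (axis_unit_vector, tilt_angle_radians)."""
    # Axis of tilt: the line through the two fingers where they first touched down.
    ax, ay = f2_start[0] - f1_start[0], f2_start[1] - f1_start[1]
    length = math.hypot(ax, ay) or 1.0
    axis = (ax / length, ay / length)
    normal = (-axis[1], axis[0])   # unit vector perpendicular to the axis

    # How far the pair's midpoint has moved perpendicular to the axis.
    mid_dx = ((f1_now[0] + f2_now[0]) - (f1_start[0] + f2_start[0])) / 2
    mid_dy = ((f1_now[1] + f2_now[1]) - (f1_start[1] + f2_start[1])) / 2
    perpendicular_drag = mid_dx * normal[0] + mid_dy * normal[1]

    return axis, perpendicular_drag * radians_per_pixel
```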

And so the last thing I want to show you is—you know, I'm sure we can all think of a lot of entertainment apps that you can do with this thing. I'm a little more interested in the kind of creative applications we can do with this. Now, here's a simple application here—I can draw out a curve. And when I close it, it becomes a character. But the neat thing about it is I can add control points. And then what I can do is manipulate them with both of my fingers at the same time. And you notice what it does. It's kind of a puppeteering thing, where I can use as many fingers as I have to draw and make—

Now, there's a lot of actual math going on under here for this to control the mesh and do the right thing. I mean, this technique of being able to manipulate a mesh here, with multiple control points, is actually state of the art. It was just released at SIGGRAPH last year, but it's a great example of the kind of research I really love: all this compute power being applied to making things do the right thing, intuitive things, to do exactly what you expect.
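
Han doesn't name the method here, and the sketch below is deliberately much simpler than the state-of-the-art technique he's alluding to, but it gives the flavor of multi-handle mesh manipulation: every vertex follows an inverse-distance-weighted blend of the control-point displacements, so nearer handles pull harder. The function and parameter names are hypothetical.

```python
import math

def deform_mesh(vertices, handles_old, handles_new, power=2.0):
    """Move each mesh vertex by an inverse-distance-weighted blend of the
    control-point (handle) displacements.

    vertices: list of (x, y) mesh points.
    handles_old / handles_new: parallel lists of handle positions before
    and after the fingers dragged them."""
    if not handles_old:
        return list(vertices)
    deformed = []
    for vx, vy in vertices:
        wsum, dx, dy = 0.0, 0.0, 0.0
        for (hx, hy), (nx, ny) in zip(handles_old, handles_new):
            dist = math.hypot(vx - hx, vy - hy)
            if dist < 1e-9:                # vertex sits on a handle: follow it exactly
                wsum, dx, dy = 1.0, nx - hx, ny - hy
                break
            w = 1.0 / dist ** power        # nearer handles pull harder
            wsum += w
            dx += w * (nx - hx)
            dy += w * (ny - hy)
        deformed.append((vx + dx / wsum, vy + dy / wsum))
    return deformed
```

A real system solves for a deformation that preserves local shape instead of just blending displacements, which is what makes the character feel like a puppet rather than taffy.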

So, multi-touch interaction research is a very active field right now in HCI. I'm not the only one doing it; there are a lot of other people getting into it. This kind of technology is going to let even more people get into it, and I'm really looking forward to interacting with all you guys over the next few days and seeing how it can apply to your respective fields. Thank you. (Applause)